3 research outputs found

    Optimising lower layers of the protocol stack to improve communication performance in a wireless temperature sensor network

    The function of wireless sensor networks is to monitor events or gather information and report it to a sink node, a central location or a base station. The information must be transmitted through the network efficiently. Wireless communication is the main activity that consumes energy in wireless sensor networks, through idle listening, overhearing, interference and collision. Because nodes die once their batteries are exhausted, it is essential to limit energy usage while maintaining communication between the sensor nodes and the sink node; conserving energy in a wireless sensor network is therefore of utmost importance. Numerous methods to decrease energy expenditure and extend the lifetime of the network have been proposed, and researchers have devised ways to use the limited energy available to wireless sensor networks efficiently by optimising design parameters and protocols. Cross-layer optimisation is one approach that has been employed to improve wireless communication: the essence of a cross-layer scheme is to optimise the exchange and control of data between two or more layers to improve efficiency. The number of transmissions is therefore a vital element in evaluating overall energy usage. In this dissertation, a Markov chain model was employed to analyse the tuning of two layers of the protocol stack, namely the Physical Layer (PHY) and the Media Access Control (MAC) layer, to find possible energy gains. The study was conducted using the IEEE 802.11 channel and the SensorMAC (SMAC) and Slotted-Aloha (S-Aloha) medium access protocols in a star-topology Wireless Temperature Sensor Network (WTSN). The research explored the prospective energy gains that could be realised by optimising the Forward Error Correction (FEC) rate. Different Reed-Solomon codes were analysed to explore the effect on energy efficiency of tuning protocol parameters, namely transmission power, modulation method and channel access. The case where no FEC code was used was analysed as the control condition. A MATLAB simulation model was used to gather statistics on collisions, the total number of packets transmitted and the total number of slots used during the transmission phase. The analytically computed bit error probabilities were used in the simulation model to determine the probability of successfully transmitting data at the physical layer. The analytical values and the simulation results were compared to corroborate the correctness of the models. The results indicate that energy gains can be achieved by the proposed layer tuning approach.
    Electrical and Mining Engineering. M. Tech. (Electrical Engineering).
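
    To make the physical-layer trade-off behind the FEC analysis concrete, the sketch below computes the probability that a packet is received correctly with and without a Reed-Solomon code, given a channel bit error probability. This is a minimal sketch, not the dissertation's MATLAB model: the BPSK/AWGN error formula, the RS(31, 19) code parameters and the payload size are illustrative assumptions only.

        # Illustrative sketch (not the dissertation's MATLAB model): packet success
        # probability with and without Reed-Solomon FEC, given a channel bit error
        # probability. RS(31, 19) over GF(2^5) and BPSK on AWGN are assumed here
        # purely for illustration.
        import math

        def bpsk_bit_error_prob(ebn0_db):
            """BPSK bit error probability on AWGN: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
            ebn0 = 10 ** (ebn0_db / 10)
            return 0.5 * math.erfc(math.sqrt(ebn0))

        def uncoded_packet_success(p_bit, n_bits):
            """An uncoded packet succeeds only if every bit is received correctly."""
            return (1 - p_bit) ** n_bits

        def rs_block_success(p_bit, n=31, k=19, m=5):
            """RS(n, k) over GF(2^m) corrects up to t = (n - k) // 2 symbol errors per block."""
            t = (n - k) // 2
            p_sym = 1 - (1 - p_bit) ** m   # a symbol is wrong if any of its m bits is wrong
            return sum(math.comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
                       for i in range(t + 1))

        if __name__ == "__main__":
            info_bits = 19 * 5   # same payload either sent uncoded or as one RS(31, 19) codeword
            for ebn0_db in (2, 4, 6):
                p = bpsk_bit_error_prob(ebn0_db)
                print(f"Eb/N0 = {ebn0_db} dB, p_bit = {p:.2e}: "
                      f"uncoded {uncoded_packet_success(p, info_bits):.4f}, "
                      f"RS(31,19) {rs_block_success(p):.4f}")

    A full energy comparison, as in the dissertation, would also have to account for the extra airtime and transmit energy of the parity symbols and for retransmissions after collisions at the MAC layer.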

    A fuzzy-logic based adaptive data rate scheme for energy-efficient LoRaWAN communication

    Long Range Wide Area Network (LoRaWAN) technology is rapidly expanding as a technology with long-distance connectivity, low power consumption, low data rates and a large number of end devices (EDs) that connect to the Internet of Things (IoT) network. Due to the heterogeneity of applications with varying Quality of Service (QoS) requirements, energy is expended as the EDs communicate with applications. The LoRaWAN Adaptive Data Rate (ADR) mechanism manages resource allocation to optimize energy efficiency, but its performance gradually deteriorates in dense networks, and various studies have sought to improve it. In this paper, we propose a fuzzy-logic based adaptive data rate (FL-ADR) scheme for energy-efficient LoRaWAN communication. The scheme is implemented on the network server (NS), which receives sensor data from the EDs via the gateway (GW) node and computes network parameters (such as the spreading factor and transmission power) to optimize the energy consumption of the EDs in the network. The performance of the algorithm is evaluated in ns-3 using a multi-gateway LoRa network with EDs sending data packets at various intervals. Our simulation results are analyzed and compared to the traditional ADR and the ns-3 ADR. The proposed FL-ADR outperforms the traditional ADR algorithm and the ns-3 ADR, minimizing the interference rate and energy consumption.
    In part by TelkomSA. https://www.mdpi.com/journal/jsan am2023 Electrical, Electronic and Computer Engineering
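
    The abstract does not spell out the FL-ADR membership functions or rule base, so the sketch below only illustrates the general idea of a fuzzy-logic data-rate decision on the network server: hypothetical inputs (uplink SNR margin and packet loss) are fuzzified with triangular memberships and a small rule set votes on whether to raise, hold or lower the data rate. All inputs, memberships and thresholds here are assumptions, not the paper's design.

        # Toy sketch of a fuzzy-logic style ADR decision on a network server.
        # The paper's actual FL-ADR membership functions and rule base are not given
        # in this abstract; the inputs (SNR margin, packet loss) and the rules below
        # are illustrative assumptions only.

        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fl_adr_step(snr_margin_db, packet_loss):
            """Return a data-rate step: +1 = faster (lower SF), 0 = hold, -1 = more robust."""
            low_m, high_m = tri(snr_margin_db, -5, 0, 5), tri(snr_margin_db, 3, 10, 17)
            low_l, high_l = tri(packet_loss, -0.2, 0.0, 0.2), tri(packet_loss, 0.1, 0.3, 0.5)

            # Rule activations combined into a simple threshold vote (a stand-in
            # for a full defuzzification step).
            up = min(high_m, low_l)     # plenty of margin and little loss -> raise data rate
            down = max(low_m, high_l)   # thin margin or heavy loss        -> lower data rate

            score = up - down
            return 1 if score > 0.25 else -1 if score < -0.25 else 0

        # Example: a strong link (12 dB margin, 2% loss) suggests moving to a faster rate.
        print(fl_adr_step(12, 0.02))   # -> 1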

    A survey on adaptive data rate optimization in LoRaWAN: recent solutions and major challenges

    Long-Range Wide Area Network (LoRaWAN) is a fast-growing communication system for Low Power Wide Area Networks (LPWANs) in Internet of Things (IoT) deployments. LoRaWAN is built to optimize LPWANs for battery lifetime, capacity, range, and cost. LoRaWAN employs an Adaptive Data Rate (ADR) scheme that dynamically optimizes data rate, airtime, and energy consumption. The major challenge in LoRaWAN is that the LoRa specification does not state how the network server must command end nodes regarding rate adaptation. As a result, numerous ADR schemes have been proposed to cater for the many applications of IoT technology, their quality of service requirements, different metrics, and radio frequency (RF) conditions. This poses a challenge for the reliability and suitability of these schemes. This paper presents a comprehensive review of the research on ADR algorithms for LoRaWAN technology. First, we provide an overview of LoRaWAN network performance that has been explored and documented in the literature, and then focus on recent solutions for ADR as an optimization approach to improve throughput, energy efficiency and scalability. We then distinguish the approaches used, highlight their strengths and drawbacks, and provide a comparison of these approaches. Finally, we identify some research gaps and future directions.
    http://www.mdpi.com/journal/sensors am2021 Electrical, Electronic and Computer Engineering
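
    For context on what these ADR schemes modify, the sketch below condenses the baseline network-server ADR decision as it is commonly described (for example, the Semtech-recommended algorithm): the best SNR of recent uplinks is compared with the SNR required by the current spreading factor, and the surplus margin is spent first on lowering the spreading factor and then on reducing transmit power. The constants used are common defaults and are assumptions here, not values taken from this survey.

        # Condensed sketch of the baseline network-server ADR logic that the surveyed
        # schemes build on. Constants (required SNR per SF, 10 dB installation margin,
        # 3 dB steps, 20-uplink history, 2-14 dBm power range) follow commonly cited
        # defaults and are assumptions, not values from this paper.

        REQUIRED_SNR_DB = {12: -20.0, 11: -17.5, 10: -15.0, 9: -12.5, 8: -10.0, 7: -7.5}

        def baseline_adr(snr_history_db, sf, tx_power_dbm,
                         margin_db=10.0, p_min=2, p_max=14):
            """Return (sf, tx_power_dbm) after one ADR evaluation at the network server."""
            snr_max = max(snr_history_db[-20:])               # best SNR of recent uplinks
            snr_margin = snr_max - REQUIRED_SNR_DB[sf] - margin_db
            n_step = int(snr_margin // 3)

            while n_step > 0 and sf > 7:                      # spend margin on lowering the SF first
                sf -= 1
                n_step -= 1
            while n_step > 0 and tx_power_dbm > p_min:        # then on reducing transmit power
                tx_power_dbm = max(p_min, tx_power_dbm - 3)
                n_step -= 1
            while n_step < 0 and tx_power_dbm < p_max:        # link too weak: raise power
                tx_power_dbm = min(p_max, tx_power_dbm + 3)
                n_step += 1
            return sf, tx_power_dbm

        # Example: a device at SF12 with a consistently strong link is stepped down.
        print(baseline_adr([-3.0, -2.5, -4.0], sf=12, tx_power_dbm=14))   # -> (10, 14)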